Verifiable Functional Purity in Java
Verifiable Functional Purity in Java. Matthew Finifter, Adrian Mettler, Naveen Sastry, and David Wagner.
To appear at the 15th ACM Conference on Computer and Communications Security (CCS 2008).
Proving that particular methods within a code base are functionally pure - deterministic and side-effect free - would aid verification of security properties including function invertibility, reproducibility of computation, and safety of untrusted code execution. Until now it has not been possible to automatically prove a method is functionally pure within a high-level imperative language in wide use such as Java. We discuss a technique to prove that methods are functionally pure by writing programs in a subset of Java called Joe-E; a static verifier ensures that programs fall within the subset. In Joe-E, pure methods can be trivially recognized from their method signature.

The paper includes a nice discussion of security benefits that can stem from being able to identify pure functions (of course, it is not obvious that guarantees at the programming language level are maintained at the runtime level). I am sure many here have opinions about whether it makes more sense to try to graft purity onto an imperative language, exposing it as an added feature, or to move programmers to functional languages. By Ehud Lamm at 2008-09-17 22:32 | Functional | OOP | Software Engineering | Type Theory | 1 comment | other blogs | 12844 reads
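To give a concrete feel for "pure by signature", here is a minimal Java sketch of the idea - my own illustration under stated assumptions, not code or API from the paper or the Joe-E library: if the receiver and every parameter have deeply immutable types and no mutable static state is reachable, the method can neither observe nor cause side effects, so purity can be read off the signature.

// Hypothetical sketch: the Immutable marker below is illustrative, not the
// actual Joe-E library interface.
interface Immutable {}                  // marker: instances are deeply immutable

final class Complex implements Immutable {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }

    // Pure by inspection of the signature: 'this' and the argument are
    // Immutable, the result is a fresh value, and no mutable static state
    // is touched, so the call is deterministic and side-effect free.
    Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }
}

public class PurityDemo {
    public static void main(String[] args) {
        Complex a = new Complex(1, 2), b = new Complex(3, 4);
        Complex c = a.plus(b);
        System.out.println(c.re + " + " + c.im + "i");   // prints 4.0 + 6.0i
    }
}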
The Transactional Memory / Garbage Collection Analogy
Courtesy of my shiny new commute, I have been listening to various podcasts, including Software Engineering Radio. A while back, they had an interview with Dan Grossman on his OOPSLA 2007 paper, which I have not seen discussed here. The Transactional Memory / Garbage Collection Analogy is an essay comparing transactional memory with garbage collection, based on the analogy that transactional memory is to shared-memory concurrency as garbage collection is to memory management.
Grossman presents the analogy as a word-for-word transliteration of a discussion of each of the technologies. (Hence the "fun" category.) As an aside, Grossman does not address message-passing.
The one serious weakness of the analogy, to me, is that GC does not require (much) programmer input to work, while TM does. Although some parts of the analogy are strained, there are some interesting correspondences. By Tommy McGuire at 2008-09-17 15:37 | Fun | Implementation | Parallel/Distributed | 15 comments | other blogs | 15668 reads
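To make the "programmer input" contrast concrete, here is a small Java sketch - my own, not from Grossman's essay, and using an ordinary lock as a stand-in for the atomic regions TM would provide: memory reclamation below needs no annotations at all, while the concurrent update still requires the programmer to delimit which operations belong together.

import java.util.ArrayList;
import java.util.List;

public class AnalogyDemo {
    // Memory management with GC: no programmer input. The arrays that are not
    // kept in 'live' become garbage and are reclaimed automatically; there are
    // no free() calls to get right.
    static List<int[]> churn() {
        List<int[]> live = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            int[] block = new int[1024];          // garbage unless retained
            if (i % 100 == 0) live.add(block);
        }
        return live;
    }

    // Concurrency: the programmer must still say which operations form an
    // atomic unit. A TM system would let us write an atomic block; here a
    // lock plays that role - exactly the kind of input GC does not require.
    private final Object lock = new Object();
    private int checking = 100, savings = 0;

    void transfer(int amount) {
        synchronized (lock) {                     // explicit atomic region
            checking -= amount;
            savings  += amount;
        }
    }

    public static void main(String[] args) {
        System.out.println(churn().size());       // 10 blocks kept live
        new AnalogyDemo().transfer(25);
    }
}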
Twilight of the GPU
This interview with Tim Sweeney discusses his prediction that graphics rendering will move from special-purpose GPUs back to the CPU:

I expect that in the next generation we'll write 100 percent of our rendering code in a real programming language—not DirectX, not OpenGL, but a language like C++ or CUDA. A real programming language unconstrained by weird API restrictions. Whether that runs on NVIDIA hardware, Intel hardware or ATI hardware is really an independent question. You could potentially run it on any hardware that's capable of running general-purpose code efficiently.

This is driven by the development of cheap multi-core CPUs. Consumers might still buy graphics boards, but they'll be fully programmable multi-core devices:

Intel's forthcoming Larrabee product will be sold as a discrete GPU, but it is essentially a many-core processor, and there's little doubt that forthcoming Larrabee competitors from NVIDIA and ATI will be similarly programmable, even if their individual cores are simpler and more specialized.

How are we going to program these devices? NVIDIA showed the data-parallel paradigm was practical with CUDA (LtU discussion). Now, Tim asks:

...can we take CUDA's restricted feature set—it doesn't support recursion or function pointers—and can we make that into a full C++ compatible language?... From my point of view, the ideal software layer is just to have a vectorizing C++ compiler for every architecture.

The FP lover in me says that data-parallel + C++ = the horror! However, I can appreciate it is a practical solution to a pressing problem. What do you think? Is Tim right? Is a data-parallel extension to C++ the right way to go? Or is the next version of Unreal going to be written in FRP?
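As a rough illustration of the data-parallel style the interview is about - my own sketch, in Java rather than C++ or CUDA, with invented names - the per-pixel kernel below is independent for every element, so the runtime can spread it across all available cores without any graphics API in sight.

import java.util.stream.IntStream;

public class ShadeDemo {
    // A toy per-pixel "shader": independent work per element is the essence
    // of the data-parallel paradigm that CUDA popularized.
    static int shade(int x, int y) {
        double v = Math.sin(x * 0.1) * Math.cos(y * 0.1);
        int gray = (int) ((v + 1.0) * 127.5);
        return (gray << 16) | (gray << 8) | gray;   // pack as an RGB pixel
    }

    public static void main(String[] args) {
        final int width = 1920, height = 1080;
        int[] frame = new int[width * height];

        // Every pixel is computed independently, so the work can be split
        // across all cores; a vectorizing compiler could go further and use
        // SIMD lanes within each core.
        IntStream.range(0, frame.length).parallel()
                 .forEach(i -> frame[i] = shade(i % width, i / width));

        System.out.println(Integer.toHexString(frame[0]));
    }
}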
First-class Macros
In First-class Macros Have Types, POPL 2000, Alan Bawden describes a way to make macros behave as first-class values. Bawden points out that while he used a static type system...
Coinductive proof principles for stochastic processes
Coinductive proof principles for stochastic processes, Dexter Kozen, Logical Methods in Computer Science, 2007.
This paper has a clever little program and a clever little proof principle for it, and it exploits connections to a bit of mathematics you don't normally see in PL research.

A Framework for Comparing Models of Computation
A Framework for Comparing Models of Computation by Edward A. Lee and Alberto Sangiovanni-Vincentelli, 1998.
We give a denotational framework (a "meta model") within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.

The generality of the approach looks very impressive. Can anyone share first-hand experience with this framework? Would be great to see E compared to Oz! By Andris Birkmanis at 2008-09-11 15:02 | Parallel/Distributed | Semantics | 5 comments | other blogs | 8958 reads
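A small Java sketch of the core definitions as the abstract states them - my own reading, not code from the paper, and using integer tags for simplicity: an event is a value-tag pair, a signal is a set of events, totally ordered tags give a timed model, and two signals are synchronous when their events carry the same set of tags.

import java.util.Set;
import java.util.TreeSet;

public class TaggedSignalDemo {
    // An event is a value-tag pair. Integer tags are totally ordered, which
    // the framework calls a timed model; a partially ordered tag set would
    // give an untimed model.
    record Event(int tag, double value) implements Comparable<Event> {
        public int compareTo(Event other) { return Integer.compare(tag, other.tag()); }
    }

    // A signal is a collection (here, a set) of events.
    static Set<Event> signal(Event... events) {
        Set<Event> s = new TreeSet<>();
        for (Event e : events) s.add(e);
        return s;
    }

    // Synchronous signals contain events with the same set of tags.
    static boolean synchronous(Set<Event> a, Set<Event> b) {
        Set<Integer> ta = new TreeSet<>(), tb = new TreeSet<>();
        a.forEach(e -> ta.add(e.tag()));
        b.forEach(e -> tb.add(e.tag()));
        return ta.equals(tb);
    }

    public static void main(String[] args) {
        Set<Event> s1 = signal(new Event(0, 1.0), new Event(1, 2.0));
        Set<Event> s2 = signal(new Event(0, 5.0), new Event(1, 7.0));
        System.out.println(synchronous(s1, s2));   // true: both have tags {0, 1}
    }
}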
Polymorphic Algebraic Data Type Reconstruction
In Polymorphic Algebraic Data Type Reconstruction, Tom Schrijvers and Maurice Bruynooghe create some magic: in addition to normal type declaration inference, they infer ADT definitions. From the abstract:
Is the algorithm guaranteed to terminate? Well...
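For readers who haven't seen the paper, here is a loose Java analogue of what "inferring an ADT definition" means - my own illustration (the paper works in a logic/functional setting, and Java has no such inference; this sketch needs a recent Java with sealed types and pattern switches): given only code that constructs and matches values, the analysis would recover a declaration like the sealed hierarchy below, which a conventional type reconstructor expects to be written up front.

// What a reconstructed declaration might look like, written out by hand: a
// list-like type with a nullary constructor and a binary cons-like constructor.
sealed interface IntList permits Nil, Cons {}
record Nil() implements IntList {}
record Cons(int head, IntList tail) implements IntList {}

public class AdtDemo {
    // The kind of usage an analysis would start from: construction and
    // matching, with the Nil/Cons declarations above left implicit.
    static int sum(IntList xs) {
        return switch (xs) {
            case Nil n  -> 0;
            case Cons c -> c.head() + sum(c.tail());
        };
    }

    public static void main(String[] args) {
        IntList xs = new Cons(1, new Cons(2, new Cons(3, new Nil())));
        System.out.println(sum(xs));   // 6
    }
}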
Clojure's Approach to Identity and State
Clojure has been discussed here before. It fits in the Lisp family with S-expressions, macros, and functions as values. Like most Lisps, it's dynamically typed and impure. But its target focus is concurrency, so symbols are defined immutably by default; its standard library's collection structures are immutable and persistent; and its various mutable concepts are threadsafe except, of course, for the back doors implied by I/O and JVM library interoperability. See vars, refs, and agents. What I wanted to highlight is a position paper of sorts that Rich Hickey has posted on Clojure's Approach to Identity and State. An excerpt:
Hickey also addresses the choice to not follow Erlang's model.
The essay is worth a read on a couple of levels of interest to LtU. At an abstract level, it's a good example of a well-articulated design justification. Agree or not, it's clear that Hickey gave thought to his decisions. Too many language designers fall into the trap of blindly inheriting semantics from a favorite language and end up putting new lipstick on the same pig. Any language designer would do well to write an essay or two like this before jumping into their venture. At the specific level, the core approach is certainly worthy of discussion and alternative designs. Is mutable state really the best way to deal with "working models"? Are there things that the pure functional camp can do to make "working models" more convenient, e.g. do Clean's uniqueness types fit the bill at least for sequential programming, or are they too restrictive? Are there things that can make Erlang style actors less cumbersome to use especially when distribution isn't necessary? By James Iry at 2008-09-08 00:23 | Paradigms | Parallel/Distributed | 5 comments | other blogs | 21677 reads
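As a rough Java rendering of the value/identity separation the essay argues for - my own sketch, not Hickey's code, using AtomicReference and an immutable list as loose stand-ins for a Clojure atom and a persistent collection: the values are immutable snapshots, and the identity is a reference that is swung atomically from one snapshot to the next, so readers always see a consistent state without locking.

import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class IdentityDemo {
    // The identity: a reference that is atomically swapped between immutable
    // values, loosely analogous to a Clojure atom holding a persistent list.
    static final AtomicReference<List<String>> friends =
        new AtomicReference<>(List.of());

    static void addFriend(String name) {
        friends.updateAndGet(old -> {
            var next = new java.util.ArrayList<>(old);
            next.add(name);
            return List.copyOf(next);      // publish a fresh immutable value
        });
    }

    public static void main(String[] args) {
        List<String> before = friends.get();   // a stable snapshot
        addFriend("rich");
        System.out.println(before);            // [] -- the old value is untouched
        System.out.println(friends.get());     // [rich] -- the identity moved on
    }
}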
Phil Windley's DSL adventures
Phil Windley has a new startup, and he is documenting some aspects of their design process (business and technical) on his blog. For us the nice part is that he is building a DSL. Here is an explanation of why building a DSL makes sense (not that we need one over here, but still a nice analysis). And here is a discussion of higher-order Perl and parsing.

Compiler Validation through Program Analysis
Anna Zaks and Amir Pnueli (2008). CoVaC: Compiler Validation by Program Analysis of the Cross-Product. In Proc. 15th Symp. Formal Methods. From the introduction: